Explainable image classification with evidence counterfactual


Abstract

The complexity of state-of-the-art modeling techniques for image classification impedes the ability to explain model predictions in an interpretable way. A counterfactual explanation highlights the parts of an image which, when removed, would change the predicted class. Both legal scholars and data scientists are increasingly turning to counterfactual explanations, as these provide a high degree of human interpretability, reveal what minimal information needs to be changed in order to come to a different prediction, and do not require the model internals to be disclosed. Our literature review shows that existing counterfactual methods for image classification have strong requirements regarding access to training data and model internals, which is often unrealistic. Therefore, SEDC is introduced as a model-agnostic, instance-level explanation method that does not need training data. As image classification tasks are typically multiclass problems, an additional contribution is the introduction of SEDC-T, which allows specifying a target counterfactual class. These methods are experimentally tested on ImageNet data; with concrete examples, we illustrate how the resulting explanations can give insights into model decisions. Moreover, they are benchmarked against existing methods, demonstrating stability of results, computational efficiency and the counterfactual nature of the explanations.
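The core idea described above, finding a small set of image parts whose removal flips the predicted class, can be sketched as a greedy search over pre-computed segments. This is a simplified illustration of the general evidence-counterfactual principle, not the authors' exact SEDC algorithm; the `toy_predict` black box, the segment indices, and "removal" as simply excluding a segment are all assumptions made for the example.

```python
def sedc_sketch(all_segments, predict):
    """Greedily remove segments until the black-box prediction flips.

    predict(removed) -> (label, score_of_original_class) is treated as
    an opaque classifier queried on the image with `removed` blanked out.
    Returns the set of removed segments, i.e. the counterfactual evidence.
    """
    removed = frozenset()
    original_label = predict(removed)[0]
    while predict(removed)[0] == original_label:
        candidates = [s for s in all_segments if s not in removed]
        if not candidates:
            return None  # no counterfactual found
        # Remove the segment whose removal most lowers the original score.
        best = min(candidates, key=lambda s: predict(removed | {s})[1])
        removed = removed | {best}
    return removed

# Hypothetical black box: predicts "cat" while at least two of the
# evidence segments {0, 1, 2} remain; score counts remaining evidence.
def toy_predict(removed):
    score = len({0, 1, 2} - set(removed))
    return ("cat" if score >= 2 else "other"), score

counterfactual = sedc_sketch(range(5), toy_predict)
```

On this toy classifier the search removes two of the three evidence segments, at which point the prediction flips to "other"; the irrelevant segments 3 and 4 are never touched, which is what makes the explanation minimal.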



Similar resources

Sample-oriented Domain Adaptation for Image Classification

Image processing is a method to perform some operations on an image, in order to get an enhanced image or to extract some useful information from it. The conventional image processing algorithms cannot perform well in scenarios where the training images (source domain) that are used to learn the model have a different distribution with test images (target domain). Also, many real world applicat...


Explainable Planning

As AI is increasingly being adopted into application solutions, the challenge of supporting interaction with humans is becoming more apparent. Partly this is to support integrated working styles, in which humans and intelligent systems cooperate in problem-solving, but also it is a necessary step in the process of building trust as humans migrate greater responsibility to such systems. The chal...


Explainable Entity-based Recommendations with Knowledge Graphs

Explainable recommendation is an important task. Many methods have been proposed which generate explanations from the content and reviews written for items. When review text is unavailable, generating explanations is still a hard problem. In this paper, we illustrate how explanations can be generated in such a scenario by leveraging external knowledge in the form of knowledge graphs. Our method...


Removing Erasures with Explainable Hash Proof Systems

An important problem in secure multi-party computation is the design of protocols that can tolerate adversaries that are capable of corrupting parties dynamically and learning their internal states. In this paper, we make significant progress in this area in the context of password-authenticated key exchange (PAKE) and oblivious transfer (OT) protocols. More precisely, we first revisit the noti...


Visually Explainable Recommendation

Images account for a significant part of user decisions in many application scenarios, such as product images in e-commerce, or user image posts in social networks. It is intuitive that user preferences on the visual patterns of image (e.g., hue, texture, color, etc) can be highly personalized, and this provides us with highly discriminative features to make personalized recommendations. Previo...



Journal

Journal title: Pattern Analysis and Applications

Year: 2022

ISSN: 1433-7541 (print), 1433-755X (electronic)

DOI: https://doi.org/10.1007/s10044-021-01055-y